Evaluation of recent embedding-based metrics for text generation is primarily based on measuring their correlation with human judgments on standard benchmarks. However, these benchmarks are mostly from domains similar to those used for pretraining the word embeddings. This raises concerns about the (lack of) generalization of embedding-based metrics to new and noisy domains that contain a different vocabulary than the pretraining data. In this paper, we examine the robustness of BERTScore, one of the most popular embedding-based metrics for text generation. We show that (a) an embedding-based metric that has the highest correlation with human evaluations on a standard benchmark can have the lowest correlation if the amount of input noise or unknown tokens increases, (b) taking embeddings from the first layer of pretrained models improves the robustness of all metrics, and (c) the highest robustness is achieved when using character-level embeddings, instead of token-based embeddings, from the first layer of the pretrained model.
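BERTScore's core computation, greedy cosine matching between candidate and reference token embeddings, can be sketched as follows. The toy embeddings stand in for model outputs, and the function is agnostic to whether the vectors come from the last layer or, as the findings above suggest for robustness, the first layer. This is a minimal illustration, not the official bert-score implementation.

```python
import numpy as np

def bertscore_f1(cand_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Greedy-matching BERTScore over two (num_tokens, dim) embedding matrices."""
    # Normalize rows so dot products are cosine similarities.
    cand = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = cand @ ref.T                  # pairwise cosine similarities
    recall = sim.max(axis=0).mean()     # each reference token -> best candidate match
    precision = sim.max(axis=1).mean()  # each candidate token -> best reference match
    return 2 * precision * recall / (precision + recall)

# Identical embeddings give a perfect score; noise lowers it.
rng = np.random.default_rng(0)
ref = rng.normal(size=(5, 8))
assert abs(bertscore_f1(ref, ref) - 1.0) < 1e-9
assert bertscore_f1(ref + rng.normal(scale=2.0, size=ref.shape), ref) < 1.0
```

The paper's noise experiments amount to perturbing the candidate side, as in the last line, and observing how the correlation with human judgments degrades.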
Activation functions can have a significant impact on reducing the topological complexity of input data and therefore improve a model's performance. Selecting a suitable activation function is an essential step in neural model design. However, the choice of activation functions is seldom discussed or explored in Transformer-based language models. Their activation functions are chosen beforehand and then remain fixed from pre-training to fine-tuning. As a result, their inductive biases on the model cannot be adjusted during this long life cycle. Moreover, subsequently developed models (e.g., RoBERTa, BART, and GPT-3) often follow up on prior work (e.g., BERT) and use the same activation functions without justification. In this paper, we investigate the effectiveness of using Rational Activation Functions (RAFs) in the Transformer architecture. In contrast to conventional, predefined activation functions, RAFs can adaptively learn an optimal activation function according to the input data. Our experiments show that the RAF-based Transformer (RAFT) achieves lower validation perplexity than a vanilla BERT with the GELU function. We further evaluate RAFT on downstream tasks in low- and full-data settings. Our results show that RAFT outperforms the counterpart model across most tasks and settings. For instance, in the low-data scenario (with 100 training examples), RAFT outperforms by 5.71 points on average on the GLUE benchmark, and by 2.05 points on SQuAD in the full-data setting. Analysis of the shapes of the learned RAFs further reveals that they vary substantially across different layers of the pre-trained model and mostly look different from conventional activation functions. RAFT opens a new research direction for analyzing and interpreting pre-trained models according to their learned activation functions.
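A rational activation function is a learnable ratio of polynomials, F(x) = P(x)/Q(x), whose coefficients are trained alongside the network's weights. The sketch below uses the common "safe" Pade-style parameterization with degrees (5, 4); the specific initial coefficients are arbitrary for illustration and are not the paper's exact setup:

```python
import numpy as np

class RationalActivation:
    """Rational activation F(x) = P(x) / Q(x) with learnable coefficients.

    Uses the 'safe' denominator Q(x) = 1 + |b1*x + ... + bm*x^m| so that Q
    can never reach zero. Degrees (5, 4) follow the common Pade-activation
    setup; in a real model the coefficients would be updated by the optimizer.
    """

    def __init__(self, a, b):
        self.a = np.asarray(a, dtype=float)  # numerator coefficients a0..an
        self.b = np.asarray(b, dtype=float)  # denominator coefficients b1..bm

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        num_powers = np.stack([x**k for k in range(len(self.a))])        # x^0..x^n
        num = np.tensordot(self.a, num_powers, axes=1)
        den_powers = np.stack([x**k for k in range(1, len(self.b) + 1)])  # x^1..x^m
        den = 1.0 + np.abs(np.tensordot(self.b, den_powers, axes=1))
        return num / den

# With a = [0, 1, 0, ...] and b = 0 this reduces to the identity function,
# one of infinitely many shapes the coefficients can move away from.
raf = RationalActivation(a=[0.0, 1.0, 0.0, 0.0, 0.0, 0.0], b=[0.0, 0.0, 0.0, 0.0])
assert np.allclose(raf(np.array([-2.0, 0.5, 3.0])), [-2.0, 0.5, 3.0])
```

Because the same parametric family can approximate GELU, ReLU, or entirely novel shapes, the network is free to learn a different activation per layer, which is exactly the per-layer variation the analysis above reports.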
Data augmentation is an important component in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
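The transformation/filter split can be illustrated with a pair of toy operations in the spirit of the framework. Note these are simplified sketches: NL-Augmenter's real operations subclass its own base classes and declare supported tasks and languages, which is omitted here.

```python
import random

class ButterFingersTransformation:
    """Illustrative transformation: randomly swaps adjacent characters to
    simulate typing noise (a sketch, not NL-Augmenter's actual class)."""

    def __init__(self, swap_prob=0.1, seed=0):
        self.swap_prob = swap_prob
        self.rng = random.Random(seed)

    def generate(self, sentence: str) -> str:
        chars = list(sentence)
        for i in range(len(chars) - 1):
            if chars[i].isalpha() and self.rng.random() < self.swap_prob:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

class LengthFilter:
    """Illustrative filter: keeps only sentences within a word-count range,
    splitting a dataset by a specific feature."""

    def __init__(self, min_words=1, max_words=20):
        self.min_words, self.max_words = min_words, max_words

    def filter(self, sentence: str) -> bool:
        return self.min_words <= len(sentence.split()) <= self.max_words

noisy = ButterFingersTransformation(swap_prob=0.3).generate("robustness matters")
assert len(noisy) == len("robustness matters")  # swaps preserve length
assert LengthFilter(max_words=3).filter("two words") is True
```

Running a model on both the original and the transformed text, and comparing predictions, is the robustness-analysis pattern the paper applies at scale.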
The COVID-19 pandemic has caused drastic alterations in all aspects of human life. The government's regulations in this regard have affected the lifestyle of all people. Given this, studying the sentiment of individuals is essential to anticipate the future impacts of coming pandemics. To contribute to this aim, we propose an NLP (Natural Language Processing) model to analyze open-text answers in a survey in Persian and detect positive and negative feelings of the people in Iran. In this study, a DistilBERT transformer model was applied to this task. We deployed three approaches for comparison, and our best model achieved an accuracy of 0.824, precision of 0.824, recall of 0.798, and F1 score of 0.804.
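The reported precision, recall, and F1 follow standard confusion-matrix arithmetic. A minimal sketch for the binary positive/negative case (the toy labels below are illustrative, not the survey data):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = positive sentiment)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
assert (p, r) == (2 / 3, 2 / 3)
```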
Several face de-identification methods have been proposed to preserve users' privacy by obscuring their faces. These methods, however, can degrade the quality of photos, and they usually do not preserve the utility of faces, e.g., their age, gender, pose, and facial expression. Recently, advanced generative adversarial network models, such as StyleGAN, have been proposed, which generate realistic, high-quality imaginary faces. In this paper, we investigate the use of StyleGAN in generating de-identified faces through style mixing, where the styles or features of the target face and an auxiliary face are mixed to generate a de-identified face that carries the utilities of the target face. We examined this de-identification method with respect to preserving utility and privacy by implementing several face detection, verification, and identification attacks. Through extensive experiments, and by comparing with two state-of-the-art face de-identification methods, we show that StyleGAN preserves the quality and utility of the faces much better than the other approaches, and that, by choosing the style-mixing levels correctly, it can also preserve the privacy of the faces much better than the other methods.
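Style mixing operates in StyleGAN's extended latent space: each of the generator's style layers can take its style vector from a different latent code. De-identification then amounts to keeping the target's styles at the levels that carry utility and substituting the auxiliary face's styles at the rest. A minimal latent-level sketch (the 18-layer, 512-dim W+ layout matches StyleGAN at 1024x1024; the split point below is an illustrative choice, since which levels carry identity versus utility is exactly what the method tunes):

```python
import numpy as np

def style_mix(w_target: np.ndarray, w_aux: np.ndarray, mix_levels) -> np.ndarray:
    """Build a W+ latent that takes styles from w_aux at `mix_levels` and
    from w_target everywhere else. Shapes: (num_layers, latent_dim)."""
    assert w_target.shape == w_aux.shape
    mixed = w_target.copy()
    mixed[list(mix_levels)] = w_aux[list(mix_levels)]
    return mixed

# 18 style layers of 512 dims, as in StyleGAN at 1024x1024 resolution.
rng = np.random.default_rng(0)
w_t, w_a = rng.normal(size=(2, 18, 512))
# Replace the coarse levels with the auxiliary face, keep the fine levels.
mixed = style_mix(w_t, w_a, mix_levels=range(0, 8))
assert np.array_equal(mixed[:8], w_a[:8]) and np.array_equal(mixed[8:], w_t[8:])
```

The mixed latent would then be fed through the StyleGAN generator (not shown) to synthesize the de-identified face.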
Natural Language Inference (NLI) or Recognizing Textual Entailment (RTE) aims at predicting the relation between a pair of sentences (premise and hypothesis) as entailment, contradiction, or semantic independence. Although deep learning models have shown promising performance for NLI in recent years, they rely on large-scale, expensive, human-annotated datasets. Semi-supervised learning (SSL) is a popular technique for reducing the reliance on human annotation by leveraging unlabeled data for training. However, despite its substantial success on single-sentence classification tasks, where the challenge in making use of unlabeled data is to assign "good enough" pseudo-labels, the nature of unlabeled data for NLI tasks is more complex: one of the sentences in the pair (usually the hypothesis), along with the class label, is missing from the data and requires human annotation, which makes SSL for NLI more challenging. In this paper, we propose a novel way to incorporate unlabeled data in SSL for NLI, where we use a conditional language model, BART, to generate the hypotheses for the unlabeled sentences (used as premises). Our experiments show that our SSL framework successfully exploits unlabeled data and substantially improves performance on four NLI datasets in low-resource settings. We release our code at: https://github.com/msadat3/SSL_for_NLI.
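The overall recipe can be sketched as: run a conditional generator over each unlabeled premise to produce a hypothesis, then treat the resulting (premise, hypothesis, label) triples as extra training data. In the sketch below, `generate_hypothesis` is a hypothetical stub standing in for a fine-tuned BART model, and generating one hypothesis per class label is one plausible conditioning design, not necessarily the authors' exact procedure:

```python
def generate_hypothesis(premise: str, label: str) -> str:
    """Stand-in for a conditional generator (e.g., fine-tuned BART) that
    produces a label-consistent hypothesis for `premise`. Stub only."""
    templates = {
        "entailment": "Something stated in: {p}",
        "contradiction": "The opposite of: {p}",
        "neutral": "A loosely related claim to: {p}",
    }
    return templates[label].format(p=premise)

def build_pseudo_labeled_set(unlabeled_premises, labels):
    """Turn unlabeled premises into (premise, hypothesis, label) triples
    that can be mixed into the labeled NLI training set."""
    return [
        (premise, generate_hypothesis(premise, label), label)
        for premise in unlabeled_premises
        for label in labels
    ]

data = build_pseudo_labeled_set(["A man plays guitar."],
                                ["entailment", "contradiction", "neutral"])
assert len(data) == 3 and data[0][2] == "entailment"
```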
Automatic topic classification has been studied extensively to assist in managing and indexing scientific documents in a digital collection. With the large number of topics available in recent years, it has become necessary to arrange them in a hierarchy. Therefore, automatic classification systems need to be able to classify documents hierarchically. In addition, each paper is often assigned to more than one relevant topic; for example, a paper can be assigned to several topics in a hierarchy tree. In this paper, we introduce a new dataset for hierarchical multi-label text classification (HMLTC) of scientific papers called SciHTC, which contains 186,160 papers and 1,233 categories from the ACM CCS tree. We establish strong baselines for HMLTC and propose a multi-task learning approach for topic classification with keyword labeling as an auxiliary task. Our best model achieves a Macro-F1 score of 34.57%, which shows that this dataset provides significant research opportunities on hierarchical scientific topic classification. We make our dataset and code available on GitHub.
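In hierarchical multi-label classification, a paper labeled with a topic implicitly belongs to every ancestor of that topic up to the root. A common preprocessing step (illustrative here, not necessarily SciHTC's exact scheme) closes each paper's label set under the hierarchy's parent relation before building multi-hot targets:

```python
def expand_with_ancestors(assigned, parent):
    """Close a set of topic labels under the parent relation of a hierarchy.

    `parent` maps each category to its parent; root categories are simply
    absent from the mapping.
    """
    labels = set()
    for topic in assigned:
        while topic is not None:
            labels.add(topic)
            topic = parent.get(topic)  # None once we walk past a root
    return labels

# A tiny slice of an ACM-CCS-like tree (hypothetical category names).
parent = {
    "Machine learning": "Computing methodologies",
    "Neural networks": "Machine learning",
}
assert expand_with_ancestors({"Neural networks"}, parent) == {
    "Neural networks", "Machine learning", "Computing methodologies",
}
```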
Despite numerous applications and the success of deep reinforcement learning in many control tasks, it still suffers from several critical problems and limitations, including temporal credit assignment with sparse rewards, lack of effective exploration, and brittle convergence that is extremely sensitive to hyperparameters. These problems of deep reinforcement learning in continuous control, together with the success of evolutionary algorithms in facing some of them, have given rise to the idea of evolutionary reinforcement learning, which has attracted considerable attention. Despite successful results in some studies in this field, a proper solution to these problems and limitations has yet to be proposed. The present study aims to investigate the efficiency of combining the two fields of deep reinforcement learning and evolutionary computation, and to take a step toward improving existing methods and addressing open challenges. The "Evolutionary Deep Reinforcement Learning Using Elite Buffer" algorithm introduces a novel mechanism inspired by interactive learning capability and hypothetical outcomes in the human brain. In this method, the utilization of the elite buffer (inspired by the generalization of experiences in the human mind), along with crossover and mutation operators and interactive learning in successive generations, improves efficiency, convergence, and proper advancement in the continuous-control domain. According to the experimental results, the proposed method surpasses other well-known methods in environments with high complexity and dimensionality, and performs well in addressing the aforementioned problems and limitations.
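The evolutionary half of such an approach follows a recognizable loop: evaluate a population of policies, copy the best into a bounded elite buffer that later generations draw from, and breed offspring via crossover and mutation. The toy numpy version below optimizes a scalar parameter vector; the fitness function and all hyperparameters are placeholders, and the gradient-based RL update that such hybrids interleave with evolution is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    """Placeholder for an episode-return estimate; maximal at theta == 1."""
    return -float(np.sum((theta - 1.0) ** 2))

def evolve(pop, elite_buffer, n_elites=2, mutation_std=0.1):
    """One generation: select elites, refresh the bounded elite buffer,
    then breed the next population from the buffer."""
    ranked = sorted(pop, key=fitness, reverse=True)
    elite_buffer = (ranked[:n_elites] + elite_buffer)[:10]  # newest elites first
    offspring = []
    while len(offspring) < len(pop):
        i, j = rng.choice(len(elite_buffer), size=2)
        mask = rng.random(elite_buffer[i].shape) < 0.5      # uniform crossover
        child = np.where(mask, elite_buffer[i], elite_buffer[j])
        offspring.append(child + rng.normal(0.0, mutation_std, child.shape))
    return offspring, elite_buffer

pop = [rng.normal(size=4) for _ in range(8)]
initial_best = max(pop, key=fitness)
buf = []
for _ in range(30):
    pop, buf = evolve(pop, buf)
final_best = max(pop + buf, key=fitness)
assert fitness(final_best) >= fitness(initial_best)  # selection did not regress
```

In the actual algorithm the parameter vectors would be policy-network weights and fitness would come from environment rollouts; the buffer's role, retaining generalized "good experiences" across generations, is the mechanism the abstract highlights.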
Road construction projects maintain transportation infrastructure. These projects range from the short-term (e.g., resurfacing or fixing potholes) to the long-term (e.g., adding a shoulder or building a bridge). Traditionally, determining the next construction project and scheduling when it should take place have been done through human inspection using special equipment. This approach is costly and difficult to scale. An alternative is to use computational methods to integrate and analyze multiple sources of past and present spatiotemporal data to predict the location and time of future road constructions. This paper reports on such an approach, which uses a deep-neural-network-based model to predict future constructions. Our model applies convolutional and recurrent components to a heterogeneous dataset consisting of construction, weather, map, and road-network data. We also report on how we addressed the lack of adequate publicly available data by constructing a large dataset named "US-Constructions", which includes 6.2 million road construction cases, augmented with a variety of spatiotemporal attributes and road-network features, collected in the contiguous United States (US) between 2016 and 2021. Using extensive experiments on several major cities in the US, we show the applicability of the approach in accurately predicting future constructions, with an average F1 score of 0.85 and accuracy of 82.2%, outperforming the baselines. Furthermore, we demonstrate how our training pipeline addresses the spatial sparsity of the data.
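One way to frame such a prediction task: for each spatial cell, a window of past weekly feature vectors becomes the model input and the following week's construction indicator the target, yielding the kind of tensor a convolutional-plus-recurrent model consumes. A minimal windowing sketch (the per-cell week-by-feature layout is an illustrative assumption, not the paper's exact pipeline):

```python
import numpy as np

def make_windows(series: np.ndarray, labels: np.ndarray, window: int):
    """Slice a (num_weeks, num_features) series into (X, y) training pairs:
    X[i] holds `window` consecutive weeks, y[i] the label of the next week."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = labels[window:]
    return X, y

# 10 weeks of 3 features for one grid cell; label = construction next week.
weeks = np.arange(30, dtype=float).reshape(10, 3)
constructed = np.array([0, 0, 1, 0, 0, 1, 1, 0, 0, 1])
X, y = make_windows(weeks, constructed, window=4)
assert X.shape == (6, 4, 3) and y.shape == (6,)
assert np.array_equal(X[0], weeks[0:4]) and y[0] == constructed[4]
```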
Process mining is a set of techniques used by organizations to understand and improve their operational processes. The first essential step in designing any process reengineering procedure is to find process improvement opportunities. In existing work, it is usually assumed that the set of problematic process instances in which an undesirable outcome occurs is known beforehand or is easily detectable. The process enhancement procedure then involves finding the root causes of, and treatments for, the problem in those process instances. For example, the set of problematic instances may be taken to be those with outlier values, or with values smaller or greater than a given threshold, in one of the process features. However, on various occasions this approach misses many process enhancement opportunities that are not captured by these problematic process instances. To overcome this issue, we formulate finding process enhancement areas as a context-sensitive anomaly/outlier detection problem. We define process enhancement areas as sets of situations (process instances or prefixes of process instances) where the process performance is surprising. We aim to characterize those situations where the process performance or outcome is significantly different from what was expected, considering the performance or outcome of similar situations. To evaluate the validity and relevance of the proposed approach, we have implemented and evaluated it on several real-life event logs.
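The core idea, flagging situations whose performance deviates strongly from comparable situations, can be sketched as grouping cases by a context key and marking those beyond k standard deviations of their group. This is a generic z-score detector for illustration, not the paper's exact method:

```python
from collections import defaultdict
from statistics import mean, pstdev

def surprising_cases(cases, k=2.0):
    """Return ids of cases whose performance lies more than k population
    standard deviations from the mean of cases sharing the same context."""
    groups = defaultdict(list)
    for case_id, context, perf in cases:
        groups[context].append((case_id, perf))
    flagged = []
    for members in groups.values():
        perfs = [p for _, p in members]
        mu, sigma = mean(perfs), pstdev(perfs)
        if sigma == 0:
            continue  # no variation in this context -> nothing is surprising
        flagged += [cid for cid, p in members if abs(p - mu) > k * sigma]
    return flagged

# Throughput times (hours); case "c4" is surprising within context "gold".
cases = [("c1", "gold", 10), ("c2", "gold", 11), ("c3", "gold", 9),
         ("c4", "gold", 40), ("c5", "silver", 100)]
assert surprising_cases(cases, k=1.5) == ["c4"]
```

In a process-mining setting the context key would encode case attributes or prefix features, and the performance measure could be throughput time, cost, or an outcome label.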